
    SMA Technical Report

    Technical report from pilot studies in the Sensing Music-related Actions group. The report presents simple motion sensor technology and issues regarding pre-processing of music-related motion data. In cognitive music research, one's main focus is the relationship between music and human beings. This involves emotions, moods, perception, expression, interaction with other people, and interaction with musical instruments and other interfaces, among many other things. Because music is a subjective experience, verbal utterances about these aspects tend to be coloured by the person who makes them: they are limited by that person's vocabulary, and by the process of consciously transforming inner feelings and experiences into words (Leman 2007: 5f). Gesture research has therefore become popular among researchers seeking a deeper understanding of how people interact with music. Several different methods are used in this kind of research, for example infrared-sensitive cameras (Wiesendanger et al. 2006) or video recordings in combination with MIDI (Jabusch 2006). This paper presents methods used in a pilot study for the Sensing Music-related Actions project at the Department of Musicology and the Department of Informatics at the University of Oslo. I discuss the methods for acquiring and analysing gestural data in this project, looking especially into the use of sensors for measuring movement and tracking absolute position. An overarching goal of the project is to develop methods for studying gestures in musical performance. Broadly, this involves gathering data, analysing it, and organizing it so that we and others can easily find and understand it.
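    The kind of pre-processing discussed in the report can be illustrated with a short sketch: the example below low-pass filters raw accelerometer data and derives a simple quantity-of-motion estimate. The sample rate, cutoff frequency, and function names are illustrative assumptions, not the project's actual pipeline.

```python
# A minimal pre-processing sketch, assuming a 100 Hz tri-axial
# accelerometer; constants and names are illustrative, not the
# project's actual code.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0      # assumed sensor sample rate (Hz)
CUTOFF = 10.0   # assumed low-pass cutoff (Hz)

def smooth(acc: np.ndarray) -> np.ndarray:
    """Low-pass filter an (n_samples, 3) accelerometer array."""
    b, a = butter(4, CUTOFF / (FS / 2), btype="low")
    return filtfilt(b, a, acc, axis=0)

def quantity_of_motion(acc: np.ndarray) -> np.ndarray:
    """Frame-wise magnitude of change in acceleration."""
    return np.linalg.norm(np.diff(acc, axis=0), axis=1)

acc_raw = np.random.randn(1000, 3)   # stand-in for recorded sensor data
qom = quantity_of_motion(smooth(acc_raw))
print(f"mean quantity of motion: {qom.mean():.3f}")
```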

    Der æ so vent å vestoheio – Intonation in a gammelstev from Setesdal

    The intonation patterns in traditional Norwegian folk songs have been described and measured in various ways for more than a hundred years. This article provides a historical summary of research in this area and introduces new software for measuring pitch. This is exemplified through our analysis of an unaccompanied folk song, «Der æ so vent å vestoheio», recorded by the Norwegian Broadcasting Corporation (NRK) in 1951 and performed by Gro Heddi Brokke (1910–1997) from the valley of Setesdal. The 5th and octave scale degrees stand out as the most stable throughout the tune, with considerable variation in the thirds, sixths, and even the tonic. In spite of this variation, the performance comes across as confident and stable; the varying intonations appear controlled rather than being performer mistakes. Still, our findings suggest that tones of longer duration vary less in intonation than shorter notes. We show how our software can be used in combination with manual analysis, and argue that automated pitch analysis may also be useful in the analysis of larger collections of Norwegian folk music.
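    As a rough illustration of what such automated pitch analysis involves, the sketch below extracts a fundamental-frequency curve from a monophonic recording and expresses it as cent deviations from an assumed tonic. This is not the software described in the article; the file name and tonic are hypothetical placeholders.

```python
# A minimal pitch-analysis sketch for a monophonic recording;
# file name and tonic are hypothetical placeholders.
import numpy as np
import librosa

y, sr = librosa.load("gammelstev.wav", sr=None)     # hypothetical file
f0, voiced, _ = librosa.pyin(y,
                             fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"),
                             sr=sr)

tonic_hz = 220.0                                    # assumed tonic (A3)
cents = 1200 * np.log2(f0[voiced] / tonic_hz)       # deviation from tonic
print(f"median deviation from tonic: {np.nanmedian(cents):.1f} cents")
```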

    Developing the Dance Jockey system for musical interaction with the Xsens MVN suit

    In this paper we present the Dance Jockey system, developed for using a full-body inertial motion capture suit (Xsens MVN) in music/dance performances. We present different strategies for extracting relevant postures and actions from the continuous data, and show how these postures and actions can be used to control sonic and musical features. The system has been used in several public performances, and we believe it has great potential for further exploration. However, to overcome the current practical and technical challenges of working with the system, it is important to further refine the tools and software in order to facilitate the making of new performance pieces. Published in Proceedings of the 12th International Conference on New Interfaces for Musical Expression, University of Michigan Press, 2012. ISBN 978-0-9855720-1-3.
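    One posture-extraction strategy of the kind described can be sketched as follows: detect a posture from joint positions in a motion capture frame and send a control message to a sound engine over OSC. The joint names, threshold logic, and OSC address are illustrative assumptions, not the actual Dance Jockey mappings.

```python
# A minimal posture-to-sound sketch; joint names, thresholds, and the
# OSC address are illustrative assumptions, not the actual system.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed sound-engine port

def hands_above_head(frame: dict) -> bool:
    """frame maps joint names to (x, y, z) positions, z pointing up."""
    return (frame["left_hand"][2] > frame["head"][2]
            and frame["right_hand"][2] > frame["head"][2])

def on_mocap_frame(frame: dict) -> None:
    """Called once per incoming motion capture frame."""
    if hands_above_head(frame):
        client.send_message("/posture/hands_up", 1.0)

on_mocap_frame({"head": (0.0, 0.0, 1.7),
                "left_hand": (0.2, 0.1, 1.9),
                "right_hand": (-0.2, 0.1, 1.8)})
```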

    Enabling Participants to Play Rhythmic Solos Within a Group via Auctions

    The paper presents the interactive music system SoloJam, which allows a group of participants with little or no musical training to play together effectively in a "band-like" setting, taking turns playing solos made up of rhythmic pattern sequences. The central requirement for such participation is decentralised, coherent circulation of solos, realised through some form of intelligence in the devices used for participation. Taking inspiration from the economic sciences, we propose that this intelligence take the form of devices capable of evaluating their utility of playing the next solo, holding auctions, and bidding within them. We show that holding auctions and bidding within them decentralises the coordination of solo circulation, and that a properly designed utility function yields coherence in the musical output. The approach achieves decentralised, coherent circulation with artificial agents simulating human participants, and its effectiveness is further supported when human users participate. As a result, the approach is shown to be effective at enabling participants with little or no musical training to play together in SoloJam.
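    The auction mechanism can be illustrated with a minimal sketch: each device evaluates its utility for playing the next solo, all devices bid, and the highest bidder takes over. The toy utility function below (favouring agents who have waited longest, plus noise) is a stand-in for the paper's actual formulation.

```python
# A toy auction sketch; the utility function is an illustrative
# stand-in, not the paper's formulation.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    bars_since_solo: int = 0

    def utility(self) -> float:
        # Toy utility: waiting longer raises the value of playing next;
        # a little noise breaks ties differently each round.
        return self.bars_since_solo + random.random()

def auction_round(agents: list) -> Agent:
    """Every agent bids its utility; the highest bid wins the solo."""
    winner = max(agents, key=lambda a: a.utility())
    for a in agents:
        a.bars_since_solo = 0 if a is winner else a.bars_since_solo + 1
    return winner

band = [Agent("alice"), Agent("bob"), Agent("carol")]
for bar in range(6):
    print(f"bar {bar}: solo by {auction_round(band).name}")
```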

    Embodying an Interactive AI for Dance Through Movement Ideation

    What expectations exist in the minds of dancers when interacting with a generative machine learning model? During two workshop events, experienced dancers explored these expectations through improvisation and role-play, embodying an imagined AI dancer. The dancers explored how intuited flow, shared images, and the concept of a human replica might work in their imagined AI-human interaction. Our findings challenge existing assumptions about what is desired from generative models of dance, such as expectations of realism, and about how such systems should be evaluated. We further advocate that such models should celebrate non-human artefacts and focus on the potential for serendipitous moments of discovery, and that dance practitioners should be included in their development. Our concrete suggestions show how our findings can be adapted into the development of improved generative and interactive machine learning models for dancers' creative practice.

    The Nymophone2: a study of a new multidimensionally controllable musical instrument

    This thesis presents documentation, theory, and evaluation of the musical instrument Nymophone2, which was built as the practical part of this two-part master's project. The main goal of the thesis is to investigate complexity of control in a multidimensionally controllable musical instrument. To this end I draw on Pierre Schaeffer's theory of the sonic object; the systems for instrument description and classification of Curt Sachs and Erich M. von Hornbostel, and of Herbert Heyde; Tellef Kvifte's system for describing playing technique; and current research on music-related movement. I present acoustic and some psychoacoustic features of the Nymophone2, and discuss the instrument's playing technique in light of the above-mentioned theory and GDIF development.

    A setup for synchronizing GDIF data using SDIF-files and FTM for Max

    The purpose of this Short-Term Scientific Mission was to investigate the use of the Sound Description Interchange Format (SDIF) as a container format for Gesture Description Interchange Format (GDIF) recordings. Much of my work involved learning the equipment at the hosting laboratory and making it capable of communicating with the software I have been using.

    Methods and Technologies for Analysing Links Between Musical Sound and Body Motion

    There are strong indications that musical sound and body motion are related. For instance, musical sound is often the result of body motion in the form of sound-producing actions, and musical sound may lead to body motion such as dance. The research presented in this dissertation focuses on technologies and methods for studying lower-level features of motion, and on how people relate motion to sound. Two experiments on so-called sound-tracing, meaning the representation of perceptual sound features through body motion, have been carried out and analysed quantitatively. The motion of a number of participants was recorded using state-of-the-art motion capture technologies.

    In order to determine the quality of the recorded data, these technologies are themselves also a subject of research in this thesis. A toolbox for storing and streaming music-related data is presented. This toolbox allows synchronised recording of motion capture data from several systems, independently of system-specific characteristics like data types or sampling rates. The thesis presents evaluations of four motion tracking systems used in research on music-related body motion: the Xsens motion capture suit, optical infrared marker-based systems from NaturalPoint and Qualisys, and the inertial sensors of an iPod Touch. These systems cover a range of motion tracking technologies, from state-of-the-art to low-cost, ubiquitous mobile devices. Weaknesses and strengths of the various systems are pointed out, with a focus on applications for music performance and the analysis of music-related motion.

    The process of extracting features from motion data is discussed, along with the motion features used in the analysis of the sound-tracing experiments, including time-varying features and global features. Features for real-time use are also discussed in relation to the development of a new motion-based musical instrument, the SoundSaber.

    Finally, four papers on sound-tracing experiments present results and methods for analysing people's bodily responses to short sound objects. These papers cover two experiments and present various analytical approaches. In the first experiment, participants moved a rod in the air so that its motion mimicked the qualities of the sound. In the second experiment, participants held two handles and a different selection of sound stimuli was used. In both experiments, optical infrared marker-based motion capture technology was used to record the motion. The links between sound and motion were analysed using four approaches: (1) a pattern recognition classifier was trained to classify sound-tracings, and its performance was analysed to search for similarities in the motion patterns exhibited by participants; (2) Spearman's ρ (rho) correlation was applied to analyse the correlation between individual sound and motion features; (3) canonical correlation analysis was applied to analyse correlations between combinations of sound features and motion features; and (4) traditional statistical tests were applied to compare sound-tracing strategies across a variety of sounds and between participants with differing levels of musical training. Since the individual analysis methods provide different perspectives on the links between sound and motion, the use of several methods is recommended to obtain a broad understanding of how sound may evoke bodily responses.
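    As a small illustration of analysis approach (2), the sketch below computes Spearman's ρ between one time-varying sound feature and one motion feature; the random arrays stand in for the actual feature curves (for example a loudness envelope and hand speed) used in the experiments.

```python
# A minimal sketch of rank correlation between feature curves;
# the random arrays are placeholders for real feature data.
import numpy as np
from scipy.stats import spearmanr

sound_feature = np.random.rand(500)    # e.g. a loudness envelope
motion_feature = np.random.rand(500)   # e.g. hand speed over time

rho, p = spearmanr(sound_feature, motion_feature)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```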

    The Challenge of Decentralised Synchronisation in Interactive Music Systems

    Synchronisation is an important part of collaborative music systems, and with such systems implemented on mobile devices, algorithms for synchronisation without central control become increasingly important. Decentralised synchronisation has been researched in many areas, and some challenges are solved. However, some assumptions often made in this research are not suitable for mobile musical systems. We present an implementation of a firefly-inspired algorithm for synchronising musical agents with fixed and equal tempo, and lay out the road ahead towards synchronisation between agents with large differences in tempo. The effect of introducing human-controlled nodes into a network of otherwise agent-controlled nodes is examined. © 2013 IEEE.
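    A firefly-inspired scheme of this kind can be sketched as pulse-coupled oscillators in the Mirollo-Strogatz tradition: each agent's phase increases steadily at a fixed, equal tempo, the agent fires (plays a beat) when its phase reaches 1, and hearing another agent fire nudges its own phase forward. The constants and absorption rule below are illustrative assumptions, not the paper's implementation.

```python
# A toy pulse-coupled oscillator simulation; constants are illustrative.
import random

N = 8            # number of musical agents
DT = 0.01        # phase advance per time step (equal, fixed tempo)
COUPLING = 0.05  # phase nudge when a pulse is heard
STEPS = 5000

phases = [random.random() for _ in range(N)]
last_group = 0

for step in range(STEPS):
    for i in range(N):
        phases[i] += DT

    # Agents at or past threshold fire; a heard pulse nudges the others,
    # and anyone pushed past threshold joins the same beat (absorption).
    fired = {i for i in range(N) if phases[i] >= 1.0}
    if fired:
        for j in range(N):
            if j not in fired:
                phases[j] += COUPLING
                if phases[j] >= 1.0:
                    fired.add(j)
        for i in fired:
            phases[i] = 0.0
        last_group = len(fired)

print(f"agents firing on the same beat at the end: {last_group}/{N}")
```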